

Search for: All records where Creators/Authors contains "Koenecke, Allison"

Note: Clicking a Digital Object Identifier (DOI) link takes you to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the publisher's embargo period.

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Free, publicly-accessible full text available June 23, 2026
  2. Free, publicly-accessible full text available April 25, 2026
  3. Trained and optimized for typical and fluent speech, speech AI works poorly for people with speech diversities, often interrupting them and misinterpreting their speech. The increasing deployment of speech AI in automated phone menus, AI-conducted job interviews, and everyday devices poses tangible risks to people with speech diversities. To mitigate these risks, this workshop aims to build a multidisciplinary coalition and set the research agenda for fair and accessible speech AI. Bringing together a broad group of academics and practitioners with diverse perspectives, including HCI, AI, and other relevant fields such as disability studies, speech language pathology, and law, this workshop will establish a shared understanding of the technical challenges for fair and accessible speech AI, as well as its ramifications in design, user experience, policy, and society. In addition, the workshop will invite and highlight first-person accounts from people with speech diversities, facilitating direct dialogues and collaboration between speech AI developers and the impacted communities. The key outcomes of this workshop include a summary paper that synthesizes our learnings and outlines the roadmap for improving speech AI for people with speech diversities, as well as a community of scholars, practitioners, activists, and policy makers interested in driving progress in this domain. 
    Free, publicly-accessible full text available April 25, 2026
  4. The language used by US courtroom actors in criminal trials has long been studied for bias. However, systematic studies of bias in high-stakes court trials have been difficult, due to the nuanced nature of bias and the legal expertise required. Large language models offer the possibility of automating annotation, but validating the computational approach requires understanding both how automated methods fit into existing annotation workflows and what they really offer. We present a case study of adding a computational model to a complex and high-stakes problem: identifying gender-biased language in US capital trials for women defendants. Our team of experienced death-penalty lawyers and NLP technologists pursues a three-phase study: first annotating manually, then training and evaluating computational models, and finally comparing expert annotations to model predictions. Unlike many typical NLP tasks, annotating for gender bias in months-long capital trials is complicated, with many individual judgment calls. Contrary to standard arguments for automation based on efficiency and scalability, the legal experts find the computational models most useful for providing opportunities to reflect on their own bias in annotation and to build consensus on annotation rules. This experience suggests that seeking to replace experts with computational models for complex annotation is both unrealistic and undesirable. Rather, computational models offer valuable opportunities to assist legal experts in annotation-based studies. (A hypothetical sketch of the expert-versus-model comparison step follows this list.)
  5. Algorithms provide powerful tools for detecting and dissecting human bias and error. Here, we develop machine learning methods to analyze how humans err in a particular high-stakes task: image interpretation. We leverage a unique dataset of 16,135,392 human predictions of whether a neighborhood voted for Donald Trump or Joe Biden in the 2020 US election, based on a Google Street View image. We show that by training a machine learning estimator of the Bayes-optimal decision for each image, we can provide an actionable decomposition of human error into bias, variance, and noise terms, and further identify specific features (like pickup trucks) that lead humans astray. Our methods can be applied to ensure that human-in-the-loop decision-making is accurate and fair, and are also applicable to black-box algorithmic systems. (An illustrative sketch of this decomposition follows this list.)
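The capital-trials study (item 4) ends by comparing expert annotations to model predictions. As a rough, hypothetical illustration of that comparison step (the labels below are invented, and the paper does not specify this exact tooling), chance-corrected agreement between expert consensus labels and model labels can be computed with scikit-learn:

```python
from sklearn.metrics import cohen_kappa_score, classification_report

# Hypothetical parallel labels for trial-transcript segments:
# 1 = segment flagged as gender-biased language, 0 = not flagged.
expert_labels = [1, 0, 0, 1, 1, 0, 0, 1]   # consensus of the legal experts
model_labels  = [1, 0, 1, 1, 0, 0, 0, 1]   # predictions from a trained model

# Chance-corrected agreement between experts and the model.
kappa = cohen_kappa_score(expert_labels, model_labels)
print(f"Cohen's kappa (expert vs. model): {kappa:.2f}")

# Per-class precision/recall, treating expert consensus as the reference.
print(classification_report(expert_labels, model_labels,
                            target_names=["not biased", "biased"]))
```

Low agreement on specific segments is exactly where, per the abstract, the experts found the model most useful: as a prompt to revisit their own annotation rules rather than as a replacement annotator.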
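The street-view study (item 5) decomposes human error into bias, variance, and noise using a trained estimate of the Bayes-optimal decision per image. The sketch below is only an illustration under simplifying assumptions (0-1 loss, a majority-vote "main" human decision, and invented array layouts and names); it is not the paper's implementation:

```python
import numpy as np

def decompose_human_error(human_preds, bayes_probs, true_labels):
    """Illustrative bias/variance/noise decomposition under 0-1 loss.

    human_preds : (n_images, n_raters) array of 0/1 human guesses,
                  with NaN where a rater did not see the image.
    bayes_probs : (n_images,) estimated P(label = 1 | image) from a trained
                  model, standing in for the Bayes-optimal decision rule.
    true_labels : (n_images,) ground-truth 0/1 labels.
    """
    # Bayes-optimal decision per image, from the trained estimator.
    bayes_dec = (bayes_probs >= 0.5).astype(int)

    # "Main" (majority-vote) human decision per image, ignoring missing ratings.
    main_dec = (np.nanmean(human_preds, axis=1) >= 0.5).astype(int)

    # Noise: error that even the Bayes-optimal decision cannot avoid.
    noise = np.mean(bayes_dec != true_labels)

    # Bias: images where the typical human decision departs from Bayes-optimal.
    bias = np.mean(main_dec != bayes_dec)

    # Variance: how often individual raters disagree with the main decision.
    valid = ~np.isnan(human_preds)
    disagree = (human_preds != main_dec[:, None]) & valid
    variance = disagree.sum() / valid.sum()

    return {"bias": bias, "variance": variance, "noise": noise}

# Toy example: 3 images, 4 raters (NaN = rater did not see the image).
humans = np.array([[1, 1, 0, np.nan],
                   [0, 0, 0, 1],
                   [1, np.nan, 1, 1]], dtype=float)
probs = np.array([0.8, 0.3, 0.6])
truth = np.array([1, 0, 0])
print(decompose_human_error(humans, probs, truth))
```

In this toy decomposition, a high bias term would indicate systematic human misreading of certain images (e.g., over-weighting features like pickup trucks), while a high variance term would indicate raters simply disagreeing with one another.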